4 research outputs found
Spectral Properties of Structured Kronecker Products and Their Applications
We study spectral properties of fundamental matrix functions of pairs of symmetric matrices, including eigenvalue inequalities and various eigenvalue interlacing properties.
We also discuss the role of interlacing in inverse eigenvalue problems for structured matrices.
Interlacing is the main ingredient of many fundamental eigenvalue inequalities. This thesis also recounts the historical development of eigenvalue inequalities relating the eigenvalues of a sum of two matrices to those of its summands, together with recent findings motivated by problems arising in compressed sensing.
One of the fundamental matrix functions on pairs of matrices is the Kronecker product. It arises in many fields, such as image processing, signal processing, quantum information theory, differential equations, and semidefinite optimization. Kronecker products enjoy rich algebraic properties that have proven useful in applications. The less-studied symmetric Kronecker product and skew-symmetric Kronecker product (a contribution of this thesis) arise in semidefinite optimization. This thesis focuses on certain interlacing and eigenvalue inequalities of structured Kronecker products in the context of semidefinite optimization.
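A concrete illustration of the spectral structure that makes the Kronecker product so tractable: the eigenvalues of A ⊗ B are exactly the pairwise products of the eigenvalues of A and B. A minimal NumPy sketch (matrix sizes and random seed are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two small random symmetric matrices (sizes arbitrary, for illustration)
A = rng.standard_normal((3, 3)); A = (A + A.T) / 2
B = rng.standard_normal((2, 2)); B = (B + B.T) / 2

# Eigenvalues of the Kronecker product A ⊗ B are the pairwise products
# of the eigenvalues of A and of B.
eig_kron = np.sort(np.linalg.eigvalsh(np.kron(A, B)))
eig_pairs = np.sort(np.outer(np.linalg.eigvalsh(A),
                             np.linalg.eigvalsh(B)).ravel())

assert np.allclose(eig_kron, eig_pairs)
```

This product rule is the starting point for the interlacing questions studied in the thesis, where the structured (Jordan, symmetric, skew-symmetric) variants no longer have such a simple spectrum.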
A popular class of methods in semidefinite optimization is primal-dual interior-point path-following algorithms. In this framework, Jordan-Kronecker products arise naturally in the computation of the Newton search direction. This product also appears in many linear matrix equations, especially in control theory. We study the properties of this product and present several elegant algebraic relations. Then, we revisit the symmetric Kronecker product and present its counterpart, the skew-symmetric Kronecker product, with its basic properties. We settle the conjectures posed by Tunçel and Wolkowicz in 2003 on interlacing properties of the eigenvalues of the Jordan-Kronecker product and on inequalities relating its extreme eigenvalues. We disprove these conjectures in general, but we also identify large classes of matrices for which the interlacing properties hold. Furthermore, we present techniques to generate classes of matrices for which these conjectures fail. In addition, we present a generalization of the Jordan-Kronecker product, obtained by replacing the transpose operator with an arbitrary symmetric involution. We study its spectral structure in terms of eigenvalues and eigenvectors and show that the generalization enjoys properties similar to those of the Jordan-Kronecker product. Lastly, we propose a related structure, the Lie-Kronecker product, and characterize its eigenvectors.
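For orientation, the Jordan-Kronecker product of symmetric A and B is commonly taken to be (A⊗B + B⊗A)/2, as in the Tunçel-Wolkowicz setting. A small NumPy check (sizes and seed arbitrary) of one basic structural fact behind the interlacing questions: the commutation (swap) matrix K commutes with this product, so the symmetric and skew-symmetric parts of matrix space are invariant subspaces:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
B = rng.standard_normal((n, n)); B = (B + B.T) / 2

# Jordan-Kronecker product (as in Tunçel-Wolkowicz): (A⊗B + B⊗A)/2
J = (np.kron(A, B) + np.kron(B, A)) / 2

# Commutation (swap) matrix K: K maps the flattening of X to that of X^T.
# Since K(A⊗B)K = B⊗A and K² = I, K commutes with J, so the symmetric and
# skew-symmetric subspaces of n×n matrices are invariant under J.
K = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        K[i * n + j, j * n + i] = 1.0

assert np.allclose(K @ J, J @ K)
assert np.allclose(J, J.T)
```

The eigenvalues of J split accordingly between the two invariant subspaces, which is where the symmetric and skew-symmetric Kronecker products enter.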
Linear MMSE-Optimal Turbo Equalization Using Context Trees
Formulations of the turbo equalization approach to iterative equalization and
decoding vary greatly when channel knowledge is either partially or completely
unknown. Maximum a posteriori probability (MAP) and minimum mean square error
(MMSE) approaches leverage channel knowledge to make explicit use of soft
information (priors over the transmitted data bits) in a manner that is
distinctly nonlinear, appearing either in a trellis formulation (MAP) or inside
an inverted matrix (MMSE). To date, nearly all adaptive turbo equalization
methods either estimate the channel or use a direct adaptation equalizer in
which estimates of the transmitted data are formed from an expressly linear
function of the received data and soft information, with this latter
formulation being most common. We study a class of direct adaptation turbo
equalizers that are both adaptive and nonlinear functions of the soft
information from the decoder. We introduce piecewise linear models based on
context trees that can adaptively approximate the nonlinear dependence of the
equalizer on the soft information, choosing both the partition regions and
the locally linear equalizer coefficients in each region
independently, with computational complexity that remains of the order of a
traditional direct adaptive linear equalizer. This approach is guaranteed to
asymptotically achieve the performance of the best piecewise linear equalizer,
and we quantify the MSE performance of the resulting algorithm and the
convergence of its MSE to that of the linear minimum MSE estimator as the depth
of the context tree and the data length increase.

Comment: Submitted to the IEEE Transactions on Signal Processing
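The partitioned adaptive structure described above can be caricatured in a few lines. This is a hypothetical toy, not the paper's algorithm: a depth-1 context "tree" that splits on the sign of the soft prior and runs an independent LMS-adapted linear filter in each region, whereas the actual method adaptively weights all prunings of a deeper tree. The channel, noise levels, and soft-information model below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup (hypothetical): BPSK symbols through a 2-tap channel plus noise,
# with "soft information" modeled as a noisy prior on each transmitted symbol.
N = 5000
x = rng.choice([-1.0, 1.0], size=N)
h = np.array([1.0, 0.5])
y = np.convolve(x, h)[:N] + 0.1 * rng.standard_normal(N)
soft = x + 0.8 * rng.standard_normal(N)   # stand-in for decoder priors

# Depth-1 "context tree": partition on the sign of the soft prior, with an
# independent LMS-adapted linear equalizer (received samples + prior as
# inputs) in each region.
taps = 2
w = np.zeros((2, taps + 1))               # one filter per region
mu = 0.01                                 # LMS step size (arbitrary)
err_sq = 0.0
for k in range(taps, N):
    region = 0 if soft[k] < 0 else 1
    u = np.concatenate([y[k - taps + 1:k + 1][::-1], [soft[k]]])
    e = x[k] - w[region] @ u
    w[region] += mu * e * u               # update only the active region
    err_sq += e * e
mse = err_sq / (N - taps)
```

The per-region complexity here is that of an ordinary adaptive linear equalizer, which is the property the abstract highlights; the context-tree machinery in the paper extends this by competing all coarser partitions against the finest one.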